-
Calibration measures quantify how much a forecaster's predictions violate calibration, which requires that forecasts be unbiased conditioned on the forecasted probabilities. Two important desiderata for a calibration measure are its decision-theoretic implications (i.e., downstream decision-makers who best respond to the forecasts are always no-regret) and its truthfulness (i.e., a forecaster approximately minimizes error by always reporting the true probabilities). Existing measures satisfy at most one of these properties, but not both. We introduce a new calibration measure termed subsampled step calibration, StepCE^sub, that is both decision-theoretic and truthful. In particular, on any product distribution, StepCE^sub is truthful up to an O(1) factor, whereas prior decision-theoretic calibration measures suffer from an e^{-Ω(T)} vs. Ω(√T) truthfulness gap. Moreover, in any smoothed setting where the conditional probability of each event is perturbed by noise of magnitude c > 0, StepCE^sub is truthful up to an O(√(log(1/c))) factor, while prior decision-theoretic measures have an e^{-Ω(T)} vs. Ω(T^{1/3}) truthfulness gap. We also prove a general impossibility result for truthful decision-theoretic forecasting: any complete and decision-theoretic calibration measure must be discontinuous and non-truthful in the non-smoothed setting.
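As a rough illustration of what a calibration measure computes, below is a minimal Python sketch of the classical binned expected calibration error. The bin count and binning rule are assumptions made for illustration; this is not the paper's StepCE^sub, whose subsampling construction is what delivers the truthfulness guarantees above.

```python
import numpy as np

def binned_ece(forecasts, outcomes, n_bins=10):
    """Binned expected calibration error: the bin-mass-weighted gap between
    the mean forecast and the empirical outcome frequency in each bin.
    (Illustrative only; not the paper's StepCE^sub.)"""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bin_ids = np.minimum((forecasts * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            err += mask.mean() * abs(forecasts[mask].mean() - outcomes[mask].mean())
    return err

# A calibrated forecaster: each outcome is drawn with its forecasted probability.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = rng.binomial(1, p)
print(binned_ece(p, y))  # small for a calibrated forecaster
```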
-
This paper introduces the concept of leakage-robust Bayesian persuasion. Situated between public Bayesian persuasion and private Bayesian persuasion, leakage-robust persuasion considers a setting where one or more signals privately communicated by a sender to the receivers may be leaked. We study the design of leakage-robust Bayesian persuasion schemes and quantify the price of robustness using two formalisms:
- The first notion, k-worst-case persuasiveness, requires a signaling scheme to remain persuasive as long as each receiver observes no more than k leaked signals from other receivers. We quantify the Price of Robust Persuasiveness (PoRP_k), i.e., the gap in the sender's utility compared to the optimal private persuasion scheme, as Θ(min{2^k, n}) for supermodular sender utilities and Θ(k) for submodular or XOS sender utilities, where n is the number of receivers. This result also establishes that in some instances, Θ(log n) leakages are sufficient for the utility of the optimal leakage-robust persuasion to degenerate to that of public persuasion.
- The second notion, expected downstream utility robustness, relaxes the persuasiveness requirement and instead considers the impact on the sender's utility when receivers best respond to their observations. Quantifying the Price of Robust Downstream Utility (PoRU) as the gap between the sender's expected utility, over the randomness in the leakage pattern, and that of private persuasion, we show that over several natural and structured distributions of leakage patterns, PoRU improves on PoRP to Θ(k) or even Θ(1), where k is the maximum number of leaked signals observable to each receiver across leakage patterns in the distribution.
En route to these results, we show that subsampling and masking serve as general-purpose algorithmic paradigms for transforming any private persuasion signaling scheme into one that is leakage-robust, with minmax optimal loss in the sender's utility. A full version of this paper can be found at https://arxiv.org/abs/2411.16624.
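The closing remark about masking can be made concrete with a toy sketch: wrap any private signaling scheme so that each receiver's signal is independently suppressed. The scheme, signal names, and masking probability below are hypothetical; the paper's actual transformation chooses the randomization so as to provably bound the loss in the sender's utility.

```python
import random

MASK = "⊥"  # null signal: a masked receiver learns nothing beyond the prior

def mask_scheme(private_scheme, q, rng=random):
    """Transform a private signaling scheme into one where each receiver's
    signal is independently replaced by a null symbol with probability q.
    A sketch of the 'masking' paradigm mentioned in the abstract."""
    def masked(state):
        return tuple(MASK if rng.random() < q else s
                     for s in private_scheme(state))
    return masked

# Hypothetical private scheme for two receivers over a binary state.
def private_scheme(state):
    return ("invest", "invest") if state == "good" else ("pass", "invest")

robust = mask_scheme(private_scheme, q=0.5)
print(robust("good"))  # e.g. ('⊥', 'invest')
```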
-
Collaboration is crucial for reaching collective goals. However, its effectiveness is often undermined by the strategic behavior of individual agents, a fact that is captured by a high Price of Stability (PoS) in recent literature [Blum et al., 2021]. Implicit in the traditional PoS analysis is the assumption that agents have full knowledge of how their tasks relate to one another. We offer a new perspective on bringing about efficient collaboration among strategic agents using information design. Inspired by the growing importance of collaboration in machine learning (such as platforms for collaborative federated learning and data cooperatives), we propose a framework where the platform has more information about how the agents' tasks relate to each other than the agents themselves. We characterize how and to what degree such platforms can leverage their information advantage to steer strategic agents toward efficient collaboration. Concretely, we consider collaboration networks where each node is a task type held by one agent, and each task benefits from contributions made in its inclusive neighborhood of tasks. This network structure is known to the agents and the platform, but only the platform knows each agent's real location; from the agents' perspective, their location is determined by a random permutation. We employ private Bayesian persuasion and design two families of persuasive signaling schemes that the platform can use to ensure a small total workload when agents follow the signal. The first family aims to achieve the minmax optimal approximation ratio compared to the optimal collaboration, which is shown to be Θ(√n) for unit-weight graphs, Θ(n^{2/3}) for graphs with constant minimum edge weights, and O(n^{3/4}) for general weighted graphs. The second family ensures per-instance strict improvement compared to full information disclosure.
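To fix ideas about the model, here is a minimal sketch of the feasibility condition implied by the setup: every node's task must receive enough total contribution from its inclusive (closed) neighborhood, and the platform's objective is the total workload. The unit demand, the contribution profile, and the tiny graph are assumptions for illustration only.

```python
def total_workload(contrib):
    """Total effort contributed across all agents."""
    return sum(contrib.values())

def all_tasks_served(adj, contrib, demand=1.0):
    """Check that each node receives at least `demand` total contribution
    from itself and its neighbors (its inclusive neighborhood)."""
    return all(
        contrib.get(v, 0.0) + sum(contrib.get(u, 0.0) for u in nbrs) >= demand
        for v, nbrs in adj.items()
    )

# A path a-b-c: one unit of work at the center node serves all three tasks.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
contrib = {"b": 1.0}
print(all_tasks_served(adj, contrib), total_workload(contrib))  # True 1.0
```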
-
We study calibration measures in a sequential prediction setup. In addition to rewarding accurate predictions (completeness) and penalizing incorrect ones (soundness), an important desideratum of calibration measures is truthfulness, a minimal condition for the forecaster not to be incentivized to exploit the system. Formally, a calibration measure is truthful if the forecaster (approximately) minimizes the expected penalty by predicting the conditional expectation of the next outcome, given the prior distribution of outcomes. We conduct a taxonomy of existing calibration measures. Perhaps surprisingly, all of them are far from being truthful. We introduce a new calibration measure termed the Subsampled Smooth Calibration Error (SSCE), which is complete and sound, and under which truthful prediction is optimal up to a constant multiplicative factor. In contrast, under existing calibration measures, there are simple distributions on which a polylogarithmic (or even zero) penalty is achievable, while truthful prediction leads to a polynomial penalty.
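The polynomial penalty for truthful prediction can be seen in a small simulation. Under a step-style calibration error, the truthful forecaster on i.i.d. Bernoulli(1/2) outcomes always predicts 1/2, and the resulting penalty concentrates around √T. The single-value step penalty below is a simplification assumed for illustration, not the exact SSCE or the other measures from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def truthful_step_penalty(T, p=0.5):
    """Penalty of the truthful forecaster (always predicting p) under a
    step-style calibration error: |sum_t (y_t - p)| at the single forecast
    value used. Simplified for illustration."""
    y = rng.binomial(1, p, size=T)
    return abs(float((y - p).sum()))

for T in (1_000, 10_000, 100_000):
    avg = np.mean([truthful_step_penalty(T) for _ in range(200)])
    print(T, avg / np.sqrt(T))  # roughly constant: penalty grows like sqrt(T)
```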